
    Report on Summit Meeting for Planning a Coalition of Digital Humanities Centers

    This report documents the national summit meeting of digital humanities centers and major funders, which took place on April 12 and 13 at NEH headquarters in Washington, D.C. The purpose of the meeting was to take seriously the ACLS Cyberinfrastructure Commission’s call for digital humanities centers to become key nodes of cyberinfrastructure in the United States. The summit was especially concerned with assessing the value of, and the desire for, greater collaboration and communication among the centers, among the funders, and between the two groups. The NEH and the summit steering committee invited participants from a representative range of 17 digital humanities centers, as well as 14 key funders of the field, including NEH, Mellon, NSF, IMLS, ACLS, the Getty, and the Luce, MacArthur, and Sloan Foundations, plus two representatives from Google.

    Music Theatre Online [Final Report]

    Music Theatre Online (originally the Electronic Broadway Project) was launched in March 2010 to coincide with the release of the Glory Days cast album by Ghostlight Records. The announcement of the site was frequently tweeted and was picked up by the web-based, crowd-edited digital humanities publication DH Now. The annotation code has been adapted by the Chymistry of Isaac Newton project at Indiana University, the audio linking tool has been used for a Danny Kaye web exhibit currently being developed at the Library of Congress, and the project was the basis for a new grant application by the project staff to the Scholarly Editions Program at the NEH. Although that application was not funded, grant reviewers acknowledged that the project staff had demonstrated technical expertise and remarked that “It is attractive that the project materials are to be made available free of charge on the web in the Music Theater Online archive.” The site was widely praised on Twitter and has been visited by users from around the globe, including England, Australia, and Japan.

    Open Annotation Collaboration Phase II Demonstration Experiments: Case Study Report

    This report presents results from a case study, conducted as part of Phase II of the Open Annotation Collaboration (OAC), examining nine annotation demonstration experiments and associated use cases. During Phase II, the OAC actively developed and experimented with an RDF-based annotation data model. The primary features of the data model were developed in response to findings in Phase I and evolved during the course of Phase II based on feedback from the demonstration experiments, community discussions, and face-to-face meetings. The case study was based primarily on interviews conducted with project developers and user groups, supplemented with information from final reports submitted by the participating projects.
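    The OAC data model, whose work later fed into the W3C Web Annotation Data Model, represents an annotation as a resource linking a body (the commentary) to a target (the thing annotated). As an illustrative sketch only (the identifiers below are placeholders, and JSON-LD stands in here for the RDF graph), such an annotation might be serialized as:

```python
import json

# A minimal sketch of an OAC-style annotation: an Annotation resource
# links a Body (the comment) to a Target (the annotated resource).
# All URIs and values below are hypothetical placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno/1",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "A marginal note on this passage.",
        "format": "text/plain",
    },
    "target": {
        "source": "http://example.org/page1.html",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "annotated passage",
        },
    },
}

# Serialize to JSON-LD, an RDF syntax commonly used for annotations.
serialized = json.dumps(annotation, indent=2)
print(serialized)
```

    The selector is what lets an annotation point at a fragment of a resource rather than the whole document, one of the interoperability concerns the demonstration experiments exercised.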

    Open Annotation - Annotation Ontology Data Model Reconciliation (A Supplement to the Open Annotation Collaboration, Phase II) [Final Report]

    Following up on a meeting held in September 2011 that focused on the synergies and common interests spanning the Annotation Ontology and Open Annotation Collaboration projects and communities, the two projects moved forward jointly in December 2011 to establish a W3C Community Group to facilitate the reconciliation and merger of their respective annotation interoperability data models and vocabularies. In January 2012, the Andrew W. Mellon Foundation awarded the University of Illinois (on behalf of the Open Annotation Collaboration) a grant of $29,500, providing the additional wherewithal required to realize this goal. We report here on the results of this grant.

    From Bitstreams to Heritage: Putting Digital Forensics into Practice in Collecting Institutions

    This paper examines the application of digital forensics methods to materials in collecting institutions, particularly libraries, archives, and museums. It discusses motivations, challenges, and emerging strategies for the use of these technologies and workflows. It is a product of the BitCurator project. The BitCurator project began on October 1, 2011, through funding from the Andrew W. Mellon Foundation. BitCurator is an effort to build, test, and analyze systems and software for incorporating digital forensics methods into the workflows of a variety of collecting institutions. It is led by the School of Information and Library Science (SILS) at the University of North Carolina, Chapel Hill and the Maryland Institute for Technology in the Humanities (MITH) at the University of Maryland, and involves contributors from several other institutions. Two groups of external partners are contributing to this process: a Professional Expert Panel (PEP) of individuals who are at various levels of implementing digital forensics tools and methods in their collecting institution contexts, and a Development Advisory Group (DAG) of individuals who have significant experience with software development. This paper is a product of phase one of BitCurator (October 1, 2011 – September 30, 2013). The second phase of the project (October 1, 2013 – September 29, 2014) continues the development of the BitCurator environment, along with expanded professional engagement and community outreach activities.
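    One foundational digital forensics method that workflows of this kind typically incorporate is fixity checking: hashing an acquired bitstream so that later copies can be verified bit-for-bit against the original. A minimal sketch (not BitCurator's actual code; the function name and the usage paths are hypothetical) using only Python's standard library:

```python
import hashlib

def fixity(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, read in 1 MiB chunks
    so that large disk images never have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (hypothetical filenames): record fixity at acquisition,
# then re-check before and after every transfer or migration.
# original = fixity("disk-image.dd")
# assert fixity("copy-of-image.dd") == original, "bitstream altered in transit"
```

    Recording the digest alongside the acquired image is what lets an institution demonstrate, years later, that the preserved bitstream is unchanged.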

    Approaches to Managing and Collecting Born-Digital Literary Materials for Scholarly Use

    Digital Humanities Level 1 Start-Up funding ($11,708) was received in support of a series of site visits and planning meetings for personnel working with the born-digital components of three significant collections of literary material: the Salman Rushdie papers at Emory University’s Manuscripts, Archives, and Rare Books Library (MARBL), the Michael Joyce Papers (and other collections) at the Harry Ransom Humanities Research Center at The University of Texas at Austin, and the Deena Larsen Collection at the Maryland Institute for Technology in the Humanities (MITH) at the University of Maryland. The meetings and site visits were undertaken with the two-fold objective of exchanging knowledge amongst the still relatively small community of practitioners engaged in such efforts, and of facilitating the preparation of a larger collaborative project proposal aimed at preserving and providing access to the born-digital documents and records of contemporary authorship. The grant period was September 2008 – March 2009. The only specified deliverable was this white paper; however, as the Outcomes and Next Steps sections (below) suggest, a small initial investment by NEH has yielded significant benefit in the form of infrastructure, knowledge sharing, and future collaboration.

    Preserving Virtual Worlds Final Report

    The Preserving Virtual Worlds project is a collaborative research venture of the Rochester Institute of Technology, Stanford University, the University of Maryland, the University of Illinois at Urbana-Champaign, and Linden Lab, conducted as part of Preserving Creative America, an initiative of the National Digital Information Infrastructure and Preservation Program at the Library of Congress. The primary goals of our project have been to investigate issues surrounding the preservation of video games and interactive fiction through a series of case studies of games and literature from various periods in computing history, and to develop basic standards for metadata and content representation of these digital artifacts for long-term archival storage.

    Shared Horizons: Data, BioMedicine, and the Digital Humanities (Final Performance Report)

    The Maryland Institute for Technology in the Humanities (MITH), working in cooperation with the Office of Digital Humanities of the National Endowment for the Humanities, the National Library of Medicine of the National Institutes of Health, and the Research Councils UK, hosted a two-day symposium, April 9-11, 2013. The symposium had three main goals: (1) to address questions about collaboration, research methodologies, and the interpretation of evidence arising from the interdisciplinary opportunities in this burgeoning area of biomedical-driven humanities scholarship; (2) to investigate the current state of the field; and (3) to facilitate future research collaborations between the humanities and biomedical sciences. Awarded via a National Endowment for the Humanities Chairman’s Cooperative Agreement, “MITH-NEH-NLM Genomics Workshop” (renamed in-house to the more descriptive “Shared Horizons: Data, BioMedicine, and the Digital Humanities”) created opportunities for disciplinary cross-fertilization through a mix of formal and informal presentations combined with breakout sessions, all designed to promote a rich exchange of ideas about how large-scale quantitative methods can lead to new understandings of human culture. Bringing together researchers from the digital humanities and bioinformatics communities, the symposium explored ways in which these two communities might fruitfully collaborate on projects that bridge the humanities and medicine around the topics of sequence alignment and network analysis, two modes of analysis that intersect with “big data.”

    Topic Modeling for Humanities Research

    Topic Modeling for Humanities Research, a one-day workshop directed by Assistant Director of MITH Dr. Jennifer Guiliano, received a Level 1 Digital Humanities Start-Up Grant from the National Endowment for the Humanities on April 19, 2011. The workshop facilitated a unique opportunity for cross-fertilization, information exchange, and collaboration between and among humanities scholars and researchers in natural language processing on the subject of topic modeling applications and methods. The workshop was organized into three primary areas: 1) an overview of how topic modeling is currently being used in the humanities; 2) an inventory of extensions of the LDA model that have particular relevance for humanities research questions; and 3) a discussion of software implementations, toolkits, and interfaces. Of particular note in this final review is the completion of all project goals, including the workshop itself, held on November 3, 2012, at the Maryland Institute for Technology in the Humanities (MITH) on the University of Maryland, College Park campus.
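    Latent Dirichlet Allocation (LDA), the model referenced in the second workshop area, assigns each word token in a corpus to one of K latent topics and infers those assignments from co-occurrence counts. As an illustrative aside (not a workshop deliverable), a toy collapsed Gibbs sampler can be sketched in pure Python; the corpus, hyperparameters, and function name below are invented for the example:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    ndk = [[0] * K for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
    nk = [0] * K                                # tokens per topic
    z = []                                      # topic assignment per token
    for di, doc in enumerate(docs):             # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(K)
            zd.append(k)
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(iters):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                k = z[di][wi]                   # remove token's current topic
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # P(topic j) is proportional to
                # (doc-topic + alpha) * (topic-word + beta) / (topic total + V*beta)
                weights = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(K)]
                r = rng.random() * sum(weights)
                for j, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = j
                        break
                z[di][wi] = k                   # reassign and restore counts
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # phi[k]: smoothed word distribution for topic k
    phi = [{w: (nkw[k][w] + beta) / (nk[k] + V * beta) for w in vocab}
           for k in range(K)]
    return ndk, phi

# Invented toy corpus with two latent themes (fruit vs. legislation).
docs = [["apple", "banana", "apple"], ["law", "vote", "law"]]
doc_topics, topic_words = lda_gibbs(docs, K=2, iters=50)
```

    The LDA extensions inventoried at the workshop (for example, models adding authorship or time) modify this same count-and-resample core rather than replacing it.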

    MITH API Workshop [White Paper]

    Funded by a Level 1 Digital Humanities Start-Up Grant, the MITH API Workshop took place at the University of Maryland, February 25-26, 2011, gathering digital humanities scholars and developers interested in using Application Programming Interfaces (APIs) with leaders from industry who have designed and deployed web-based APIs for their companies. An API can be informally defined as a set of published commands that computer programmers can use in their own code to interact with code that they did not write and to which they often have only limited access. For example, an API is often provided to allow third-party programmers to retrieve data from a repository that they do not control. The value of this for a humanities scholar cannot be overstated: APIs allow access to data that may let scholars ask different questions than the application interface allows and/or combine data from disparate repositories into a single archive, illuminating unknown or unacknowledged connections. As such, the purpose of the workshop was to lay groundwork for the integration of APIs into participant-driven digital humanities projects and to serve as a platform for developing future ideas about how to share and access humanities data through APIs.
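    The repository-retrieval pattern described above, fetching records through a published endpoint and combining them with records from another source, can be sketched as follows. The endpoint, field names, record keys, and sample data are hypothetical rather than any particular repository's API:

```python
import json
from urllib.request import urlopen

def fetch_records(endpoint):
    """Retrieve JSON records from a repository's published API endpoint.
    The 'results' field name is a hypothetical response shape."""
    with urlopen(endpoint) as resp:
        return json.load(resp)["results"]

def merge_by_key(*record_sets, key="id"):
    """Combine records from disparate repositories into a single archive,
    merging records that share the same identifier."""
    archive = {}
    for records in record_sets:
        for rec in records:
            # Records with the same key are folded together, surfacing
            # connections neither repository exposes on its own.
            archive.setdefault(rec[key], {}).update(rec)
    return archive

# Usage with two invented repositories' responses:
library_a = [{"id": "doc1", "title": "Letters, 1921", "scans": 14}]
library_b = [{"id": "doc1", "transcription": "My dear friend"}]
combined = merge_by_key(library_a, library_b)
```

    In practice each repository documents its own endpoints, authentication, and response shape, so a real client would adapt `fetch_records` per repository before merging.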